Entropy-dissipation Informed Neural Network for McKean-Vlasov Type PDEs
We extend the concept of self-consistency for the Fokker-Planck equation
(FPE) to the more general McKean-Vlasov equation (MVE). While FPE describes the
macroscopic behavior of particles under drift and diffusion, MVE accounts for
the additional inter-particle interactions, which are often highly singular in
physical systems. Two important examples considered in this paper are the MVE
with Coulomb interactions and the vorticity formulation of the 2D Navier-Stokes
equation. We show that a generalized self-consistency potential controls the
KL-divergence between a hypothesis solution and the ground truth through
entropy dissipation. Building on this result, we propose to solve the MVEs by
minimizing this potential function, while using neural networks for
function approximation. We validate the empirical performance of our approach
by comparing with state-of-the-art NN-based PDE solvers on several example
problems.

Comment: Accepted to NeurIPS 202
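The entropy-dissipation idea above can be illustrated with a toy 1D sketch (this is an illustration of the KL-divergence criterion on a Fokker-Planck stationary density, not the authors' neural-network method): for the FPE with quadratic potential V(x) = x²/2, the stationary density is the Gibbs measure ρ*(x) ∝ exp(-V(x)), i.e. a standard normal, and a Gaussian "hypothesis solution" can be fit by minimizing its KL-divergence to ρ* on a grid. All function and variable names are hypothetical.

```python
import numpy as np

# Toy illustration (not the paper's method): fit a Gaussian hypothesis
# density to the ground-truth stationary density of a Fokker-Planck
# equation with potential V(x) = x^2 / 2, whose Gibbs measure is the
# standard normal. The fit criterion is the KL-divergence, the quantity
# the paper's entropy-dissipation bound controls.

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]

def gaussian(x, sigma):
    """Mean-zero Gaussian density with standard deviation sigma."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

rho_true = gaussian(x, 1.0)  # ground-truth stationary density

def kl(p, q):
    """Grid approximation of KL(p || q)."""
    return np.sum(p * np.log(p / q)) * dx

# Crude one-parameter scan in place of neural-network training.
sigmas = np.linspace(0.5, 2.0, 151)
kls = [kl(gaussian(x, s), rho_true) for s in sigmas]
best = sigmas[int(np.argmin(kls))]  # KL is minimized at sigma = 1
```

In the paper the hypothesis density is parameterized by a neural network rather than a single scalar, and the loss is the generalized self-consistency potential rather than the KL-divergence itself, but the logic of "minimize a functional that upper-bounds the KL gap to the true solution" is the same.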
Accelerated Stochastic ADMM with Variance Reduction
Alternating Direction Method of Multipliers (ADMM) is a popular method for
solving machine learning problems. Stochastic ADMM was first proposed to
reduce the per-iteration computational complexity, making it more suitable
for big-data problems. Recently, variance reduction techniques have
been integrated with stochastic ADMM to obtain faster convergence rates,
as in SAG-ADMM and SVRG-ADMM, but the convergence is still suboptimal with respect to
the smoothness constant. In this paper, we propose a new accelerated stochastic
ADMM algorithm with variance reduction, which enjoys faster convergence than
all other stochastic ADMM algorithms. We theoretically analyze its
convergence rate and show that its dependence on the smoothness constant is
optimal. We also empirically validate its effectiveness and show its
superiority over other stochastic ADMM algorithms.
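The SVRG-ADMM baseline mentioned above can be sketched on a small lasso problem (this is an illustrative SVRG-style stochastic ADMM, not the accelerated algorithm proposed in the paper; all variable names and parameter values are hypothetical). The x-update is linearized around a variance-reduced stochastic gradient built from a periodic full-gradient snapshot, the z-update is a soft-threshold, and the dual variable is updated in the usual ADMM fashion.

```python
import numpy as np

# Toy SVRG-style stochastic ADMM (illustrative only). We solve the lasso
#     min_x (1/2n) * ||A x - b||^2 + lam * ||z||_1   s.t.  x = z,
# with a linearized x-update whose stochastic gradient is variance-reduced
# via a per-epoch full-gradient snapshot, as in SVRG-ADMM.

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_star = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
b = A @ x_star + 0.1 * rng.standard_normal(n)
lam, rho, eta = 0.1, 1.0, 0.2  # l1 weight, penalty, step size (hypothetical)

def full_grad(x):
    return A.T @ (A @ x - b) / n

def soft(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.mean((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(d); z = np.zeros(d); y = np.zeros(d)
obj_init = objective(x)
for epoch in range(30):
    x_snap, g_snap = x.copy(), full_grad(x)  # SVRG snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient of the smooth part.
        g = (A[i] * (A[i] @ x - b[i])
             - A[i] * (A[i] @ x_snap - b[i]) + g_snap)
        # Linearized x-update (closed form of the quadratic subproblem).
        x = (x / eta + rho * z - g - y) / (1.0 / eta + rho)
        z = soft(x + y / rho, lam / rho)  # z-update: prox of the l1 term
        y = y + rho * (x - z)             # dual ascent on the constraint
```

The paper's contribution, by contrast, is adding Nesterov-style acceleration on top of such a variance-reduced scheme so that the convergence rate's dependence on the smoothness constant becomes optimal.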